Fano's inequality
In information theory, Fano's inequality (also known as the Fano converse and the Fano lemma) relates the average information lost in a noisy channel to the probability of the categorization error. It was derived by Robert Fano in the early 1950s while teaching a Ph.D. seminar in information theory at MIT, and later recorded in his 1961 textbook.
It is used to find a lower bound on the error probability of any decoder, as well as lower bounds on minimax risks in density estimation.
Let the random variables ''X'' and ''Y'' represent input and output messages with joint probability P(x,y). Let ''e'' represent an occurrence of error; i.e., the event X\neq \tilde{X}, where \tilde{X}=f(Y) is a noisy reconstruction of X from ''Y''. Fano's inequality is
:H(X|Y)\leq H(e)+P(e)\log(|\mathcal{X}|-1),
where \mathcal{X} denotes the support of ''X'',
:H\left(X|Y\right)=-\sum_{i,j} P(x_i,y_j)\log P\left(x_i|y_j\right)
is the conditional entropy,
:P(e)=P(X\neq \tilde{X})
is the probability of the communication error, and
:H(e)=-P(e)\log P(e)-(1-P(e))\log(1-P(e))
is the corresponding binary entropy.
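As a sanity check, the inequality can be verified numerically for any concrete channel and estimator. The sketch below is a hypothetical example assuming NumPy; the 5×5 joint distribution and the choice of a maximum-a-posteriori estimator are illustrative assumptions, not from the article. It computes H(X|Y), P(e), and H(e) and confirms the bound:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint distribution P(x_i, y_j) over a 5x5 alphabet.
P = rng.random((5, 5))
P /= P.sum()

Py = P.sum(axis=0)          # marginal P(y_j)
Px_given_y = P / Py         # conditional P(x_i | y_j), column j fixed

# Conditional entropy H(X|Y) = -sum_{i,j} P(x_i,y_j) log P(x_i|y_j), in nats.
H_XgY = -np.sum(P * np.log(Px_given_y))

# One valid estimator Xtilde = f(Y): the maximum-a-posteriori choice.
# Error probability P(e) = P(X != Xtilde).
Pe = 1.0 - sum(Py[j] * Px_given_y[:, j].max() for j in range(5))

# Binary entropy H(e) of the error event.
He = -Pe * np.log(Pe) - (1 - Pe) * np.log(1 - Pe)

# Fano: H(X|Y) <= H(e) + P(e) log(|X| - 1), here |X| = 5.
rhs = He + Pe * np.log(5 - 1)
print(f"H(X|Y) = {H_XgY:.4f} <= {rhs:.4f}: {H_XgY <= rhs}")
```

Since H(e)\leq\log 2 in nats, rearranging the inequality gives P(e)\geq (H(X|Y)-\log 2)/\log(|\mathcal{X}|-1) for any decoder, which is how the lower bound on error probability mentioned above is obtained.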
==Alternative formulation==
Let ''X'' be a random variable with density equal to one of r+1 possible densities f_1,\ldots,f_{r+1}. Furthermore, the Kullback–Leibler divergence between any pair of densities cannot be too large,
:D_{\mathrm{KL}}(f_i\|f_j)\leq \beta for all i\neq j.
Let \psi(X)\in\{1,\ldots,r+1\} be an estimate of the index. Then
:\sup_i P_i(\psi(X)\neq i) \geq 1-\frac{\beta+\log 2}{\log r},
where P_i is the probability distribution induced by f_i.
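Since the right-hand side depends only on \beta and r, the bound can be evaluated directly. A minimal sketch follows; the function name and the parameter values are hypothetical illustrations, not from the article:

```python
import numpy as np

def fano_minimax_lower_bound(beta: float, r: int) -> float:
    """Lower bound sup_i P_i(psi(X) != i) >= 1 - (beta + log 2) / log r,
    where beta bounds the pairwise KL divergences among the r+1
    candidate densities f_1, ..., f_{r+1}."""
    return 1.0 - (beta + np.log(2)) / np.log(r)

# E.g., 17 candidate densities (r = 16) with pairwise KL divergence <= 0.5:
print(fano_minimax_lower_bound(beta=0.5, r=16))  # about 0.57
```

Note that the bound is informative only when \log r > \beta + \log 2, i.e., when the candidate densities are numerous relative to how hard they are to tell apart.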
